
    Image Reconstruction from Undersampled Confocal Microscopy Data using Multiresolution Based Maximum Entropy Regularization

    We consider the problem of reconstructing 2D images from randomly under-sampled confocal microscopy measurements. The well-known total variation regularization, which is the L1 norm of derivatives, turns out to be unsuitable for this problem: it cannot handle noise and under-sampling together. This issue is linked to the phase transition phenomenon observed in compressive sensing research, which is essentially the breakdown of total variation methods when the sampling density falls below a certain threshold. The severity of this breakdown is determined by the so-called mutual incoherence between the derivative operators and the measurement operator. In our problem, the mutual incoherence is low, and hence total variation regularization produces serious artifacts in the presence of noise even when the sampling density is not very low. There have been very few attempts to develop regularization methods that perform better than total variation regularization for this problem. We develop a multi-resolution based regularization method that is adaptive to image structure. In our approach, the desired reconstruction is formulated as a series of coarse-to-fine multi-resolution reconstructions; at each level, the regularization is constructed to be adaptive to the image structure, where the information for adaptation is obtained from the reconstruction at the coarser resolution level. This adaptation is achieved by using the maximum entropy principle: the required adaptive regularization is determined as the maximizer of entropy subject to constraints given by the information extracted from the coarse reconstruction. We demonstrate the superiority of the proposed regularization method over existing ones using several reconstruction examples.
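    For reference, the total variation reconstruction that the abstract argues breaks down can be written as the standard minimization below. This is a generic sketch in our own notation, not the paper's formulation: M denotes the random sampling operator, m the under-sampled measurements, and D the discrete derivative operator.

    \[ \hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\| M x - m \|_2^2 \;+\; \lambda \,\| D x \|_1 \]

    Here \lambda > 0 weighs the L1 norm of derivatives against data fidelity; loosely, the proposed method replaces this fixed penalty with one re-derived at each resolution level, via the maximum entropy principle, from the coarser reconstruction.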

    Non-convex regularization based on shrinkage penalty function

    Total variation (TV) regularization is a seminal approach for image recovery. TV involves the norm of the image's gradient, aggregated over all pixel locations, and therefore leads to piecewise-constant solutions, resulting in what is known as the "staircase effect." To mitigate this effect, the Hessian Schatten norm regularization (HSN) employs second-order derivatives: the lp norm of the eigenvalues of the image Hessian, summed across all pixels. HSN demonstrates superior structure-preserving properties compared to TV. However, HSN solutions tend to be overly smoothed. To address this, we introduce a non-convex shrinkage penalty applied to the Hessian's eigenvalues, deviating from the convex lp norm. It is important to note that the shrinkage penalty is not defined in closed form, but is specified indirectly through its proximal operator. This makes constructing a provably convergent algorithm difficult, as the singular values are also defined through a non-linear operation. Nevertheless, we derive a provably convergent algorithm using proximal operations. We prove convergence by establishing that the proposed regularization adheres to restricted proximal regularity. The images recovered by this regularization are sharper than those of the convex counterparts.
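    In generic notation of our own (not necessarily the paper's), the Hessian Schatten norm regularizer discussed above sums, over pixel locations i, the lp norm of the eigenvalues of the 2x2 Hessian of the image x at that location:

    \[ \mathrm{HSN}_p(x) \;=\; \sum_{i} \big\| \operatorname{eig}\big( (\mathcal{H}x)_i \big) \big\|_p, \qquad 1 \le p \le \infty \]

    The proposed method swaps this convex lp penalty on the eigenvalues for a non-convex shrinkage penalty \phi accessible only through its proximal operator, \operatorname{prox}_{\lambda\phi}(v) = \arg\min_u \tfrac{1}{2}(u - v)^2 + \lambda\,\phi(u), which is what the convergence analysis must work with.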

    Photo-acoustic tomographic image reconstruction from reduced data using physically inspired regularization

    We propose a model-based image reconstruction method for photoacoustic tomography (PAT) involving a novel form of regularization, and demonstrate its ability to recover good-quality images from significantly reduced-size datasets. The regularization is constructed to suit the physical structure of typical PAT images. We construct it by combining second-order derivatives and intensity into a non-convex form to exploit a structural property we observe in PAT images: high intensities and high second-order derivatives are jointly sparse. The specific form of regularization constructed here is a modification of a form proposed for fluorescence image restoration. This regularization is combined with a data-fidelity cost, and the required image is obtained as the minimizer of this cost. As the regularization is non-convex, the efficiency of the minimization method is crucial for obtaining artifact-free reconstructions. We develop a custom minimization method for efficiently handling this non-convex minimization problem. Further, as non-convex minimization requires a large number of iterations and the PAT forward model in the data-fidelity term has to be applied in each iteration, we propose a computational structure for efficient implementation of the forward model with reduced memory requirements. We evaluate the proposed method on both simulated and real measured datasets and compare it with a recent reconstruction method based on the well-known fast iterative shrinkage-thresholding algorithm (FISTA). (Published in the Journal of Instrumentation.)
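    Schematically, the model-based reconstruction described above has the familiar variational form below, written in our own notation: A is the PAT forward operator, y the reduced measured data, and R the physically inspired non-convex regularizer coupling intensity and second-order derivatives.

    \[ \hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\| A x - y \|_2^2 \;+\; \lambda\, R(x) \]

    Since A must be applied at every iteration of the non-convex solver, the forward model dominates the run time, which is why the abstract emphasizes an efficient, memory-light implementation of A.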

    Variational reconstruction of scalar and vector images from non-uniform samples

    We address the problem of reconstructing scalar and vector functions from non-uniform data. The reconstruction is formulated as a minimization problem whose cost is a weighted sum of two terms: the first, data term is a quadratic measure of goodness of fit, while the second, regularization term is a smoothness functional. We concentrate on the case where the latter is a semi-norm involving differential operators. We are interested in a solution that is invariant with respect to scaling and rotation of the input data, and we first show that this is achieved whenever the smoothness functional is both scale- and rotation-invariant. In the first part of the thesis, we address the scalar problem. An elegant solution having the above-mentioned invariance properties is provided by Duchon's method of thin-plate splines. Unfortunately, that solution involves radial basis functions that are poorly conditioned, and it becomes impractical when the number of samples is large. We propose a computationally efficient alternative where the minimization is carried out within the space of uniform B-splines. We show how the B-spline coefficients of the solution can be obtained by solving a well-conditioned, sparse linear system of equations. By taking advantage of the refinable nature of B-splines, we devise a fast multiresolution-multigrid algorithm, and we demonstrate the effectiveness of this method in the context of image processing. Next, we consider the reconstruction of vector functions from projected samples, meaning that the input data do not contain the full vector values but only some directional components. We first define the rotational and scale invariance of a vector smoothness functional, and then characterize the complete family of such functionals. We show that such a functional is composed of a weighted sum of two sub-functionals: (i) Duchon's scalar semi-norm applied to the divergence field, and (ii) the same semi-norm applied to each component of the rotational field. This forms a three-parameter family, where the first two parameters are the Duchon orders of the above sub-functionals and the third is their relative weight. Our family is general enough to include all vector spline formulations proposed so far. We provide the analytical solution of this minimization problem and show that it can be expressed as a weighted sum of vector basis functions, which we call generalized vector splines. We construct the linear system of equations that yields the required weights. As in the scalar case, we also provide an alternative B-spline solution for this problem and propose a fast multigrid algorithm. Finally, we apply our vector field reconstruction method to cardiac motion recovery from ultrasound pulsed-wave Doppler data, and demonstrate its clinical potential.
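    In the scalar case, the minimization described above can be sketched, again in our own generic notation, as

    \[ \hat{f} \;=\; \arg\min_{f} \; \sum_{k} \big| f(\mathbf{x}_k) - y_k \big|^2 \;+\; \lambda\, \| f \|_{\mathcal{D}}^2 \]

    where the first term measures goodness of fit at the non-uniform sample points \mathbf{x}_k and \| \cdot \|_{\mathcal{D}} is a scale- and rotation-invariant semi-norm such as Duchon's. Restricting f to the span of uniform B-splines is what turns this variational problem into the sparse, well-conditioned linear system in the B-spline coefficients mentioned in the abstract.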